Imagine trying to teach a dog a new trick that involves many steps. If you only rewarded the complete sequence of correct steps, the dog would be unlikely ever to learn it. Instead, one creates a more continuous reward structure, giving small rewards for single steps and larger rewards as more steps are completed. This means that small improvements in behaviour earn small rewards and are hence reinforced.
The same is true of learning more generally, so when preparing problems for machine learning it can be helpful to ensure there is some degree of continuity of learning. For example, in a constraint satisfaction task one might use soft constraints, so that given a constraint "A=10", the fitness function gives some reward for values close to 10. Often the softness of the constraints is adjusted over time, making the fitness function gradually less forgiving, just as one would praise a toddler for first steps but expect more from an elite athlete! A minimal sketch of such a soft constraint is given below.
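As a rough illustration (the function name, the exponential decay, and the softness parameter are my own illustrative choices, not taken from the text), a soft version of the constraint A=10 might reward near-misses like this:

```python
import math

def soft_constraint_fitness(value, target=10.0, softness=2.0):
    """Return a reward in (0, 1] that decays smoothly as `value`
    moves away from `target`, instead of an all-or-nothing check.
    Smaller `softness` makes the constraint less forgiving."""
    return math.exp(-abs(value - target) / softness)

# A hard constraint (1.0 if value == 10 else 0.0) gives no signal to follow.
# The soft version rewards near-misses, so a small improvement
# (e.g. moving from 14 to 12) still increases fitness.
print(soft_constraint_fitness(12.0))                 # near miss, partial reward
print(soft_constraint_fitness(12.0, softness=0.5))   # stricter, later in training
```

Shrinking `softness` over the course of a run mimics the gradual tightening described above: early on, rough approximations are rewarded; later, only near-exact solutions score well.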
The sigmoid activation function, used as a soft threshold in backpropagation, is an important example of how continuity revolutionised a whole class of algorithms.
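As an illustrative sketch (the function names here are mine, not from the text), the key difference between a hard threshold and the sigmoid is that the sigmoid is continuous and has a usable derivative, which is what backpropagation relies on to assign credit:

```python
import math

def step(x):
    """Hard threshold: flat almost everywhere, so no gradient to learn from."""
    return 1.0 if x >= 0.0 else 0.0

def sigmoid(x):
    """Soft threshold: continuous and differentiable everywhere."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative used by backpropagation to propagate error signals."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Near the threshold, the sigmoid still reports how close the unit is
# to switching, which is exactly the signal gradient descent needs.
print(step(0.3), sigmoid(0.3), sigmoid_derivative(0.3))
```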
Used on page 192